Sleep


A Generalized Adaptive Joint Learning Framework for High-Dimensional Time-Varying Models

Chen, Baolin, Ran, Mengfei

arXiv.org Machine Learning

In modern biomedical and econometric studies, longitudinal processes are often characterized by complex time-varying associations and abrupt regime shifts that are shared across correlated outcomes. Standard functional data analysis (FDA) methods, which prioritize smoothness, often fail to capture these dynamic structural features, particularly in high-dimensional settings. This article introduces Adaptive Joint Learning (AJL), a hierarchical regularization framework designed to integrate functional variable selection with structural changepoint detection in multivariate time-varying coefficient models. Unlike standard simultaneous estimation approaches, we propose a theoretically grounded two-stage screening-and-refinement procedure. This framework first synergizes adaptive group-wise penalization with sure screening principles to robustly identify active predictors, followed by a refined fused regularization step that effectively borrows strength across multiple outcomes to detect local regime shifts. We provide a rigorous theoretical analysis of the estimator in the ultra-high-dimensional regime (p >> n). Crucially, we establish the sure screening consistency of the first stage, which serves as the foundation for proving that the refined estimator achieves the oracle property: it performs as well as if the true active set and changepoint locations were known a priori. A key theoretical contribution is the explicit handling of approximation bias via undersmoothing conditions to ensure valid asymptotic inference. The proposed method is validated through comprehensive simulations and an application to Sleep-EDF data, revealing novel dynamic patterns in physiological states.
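The two-stage idea can be illustrated with a toy sketch of the screening stage. Everything below (the data sizes, signal strengths, and the use of plain marginal-correlation sure independence screening in place of the paper's adaptive group-wise penalization) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, d = 200, 500, 30                    # ultra-high-dimensional regime: p >> n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.5, 2.0, 1.5, -1.5]    # true active set: first 5 predictors
y = X @ beta + 0.5 * rng.standard_normal(n)

# Stage 1 (screening): rank predictors by absolute marginal correlation with
# the response and retain the top d. This is plain sure independence
# screening, a simplified stand-in for adaptive group-wise penalization.
scores = np.abs(X.T @ (y - y.mean())) / n
active = np.sort(np.argsort(scores)[::-1][:d])

# Stage 2 (refinement) would re-fit only on `active` with a fused penalty to
# locate regime shifts; here we simply report the surviving predictors.
print("screened-in predictors:", active[:10])
```

With high probability the true support survives the screen, which is the sure screening consistency property that the refinement stage builds on.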


A robust generalizable device-agnostic deep learning model for sleep-wake determination from triaxial wrist accelerometry

Montazeri, Nasim, Yang, Stone, Luszczynski, Dominik, Zhang, John, Gurve, Dharmendra, Centen, Andrew, Goubran, Maged, Lim, Andrew

arXiv.org Artificial Intelligence

Study Objectives: Wrist accelerometry is widely used for inferring sleep-wake state. Previous work has shown poor wake detection and has lacked cross-device generalizability and validation across age ranges and sleep disorders. We developed a robust deep learning model to detect sleep-wakefulness from triaxial accelerometry and evaluated its validity across three devices and in a large adult population spanning a wide range of ages, with and without sleep disorders. Methods: We collected wrist accelerometry simultaneously with polysomnography (PSG) in 453 adults undergoing clinical sleep testing at a tertiary care sleep laboratory, using three devices. We extracted features in 30-second epochs and trained a 3-class model to detect wake, sleep, and sleep with arousals, which was then collapsed into wake vs. sleep using a decision tree. To enhance wake detection, the model was specifically trained on randomly selected subjects with low sleep efficiency and/or high arousal index from one device's recordings and then tested on the remaining recordings. Results: The model showed high performance, with an F1 score of 0.86, sensitivity (sleep) of 0.87, and specificity (wakefulness) of 0.78, and significant, moderate correlations with PSG in predicting total sleep time (R=0.69) and sleep efficiency (R=0.63). Model performance was robust to the presence of sleep disorders, including sleep apnea and periodic limb movements in sleep, and was consistent across all three accelerometer models. Conclusions: We present a deep learning model to detect sleep-wakefulness from actigraphy in adults that is relatively robust to the presence of sleep disorders and generalizes across diverse, commonly used wrist accelerometers.
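The 30-second epoching and feature-extraction step can be sketched in a few lines. The sampling rate, feature choices, and function name below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def epoch_features(acc, fs=50, epoch_sec=30):
    """Split a triaxial accelerometry trace (n_samples, 3) into
    non-overlapping 30-second epochs and compute simple per-epoch features."""
    win = fs * epoch_sec
    n_epochs = acc.shape[0] // win
    acc = acc[:n_epochs * win].reshape(n_epochs, win, 3)
    mag = np.linalg.norm(acc, axis=2)               # vector magnitude per sample
    return np.column_stack([
        mag.mean(axis=1),                           # mean activity level
        mag.std(axis=1),                            # within-epoch variability
        np.abs(np.diff(mag, axis=1)).mean(axis=1),  # sample-to-sample change
    ])

# Four 30-second epochs of fake 50 Hz data, plus a trailing partial epoch.
acc = np.random.default_rng(1).standard_normal((50 * 30 * 4 + 10, 3))
F = epoch_features(acc)
print(F.shape)  # (4, 3): four epochs, three features each
```

Epoch-level feature matrices of this shape are what a 3-class classifier would consume before the wake/sleep collapse.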



DreamCatcher: A Wearer-aware Sleep Event Dataset Based on Earables in Non-restrictive Environments

Neural Information Processing Systems

Widely available earbuds equipped with sensors (also known as earables) can be combined with a sleep event detection algorithm to offer a convenient alternative to laborious clinical tests for individuals suffering from sleep disorders. Although various solutions utilizing such devices have been proposed to detect sleep events, they ignore the fact that individuals often share sleeping spaces with roommates or couples. To address this issue, we introduce DreamCatcher, the first publicly available dataset for wearer-aware sleep event algorithm development on earables.



NeuroLingua: A Language-Inspired Hierarchical Framework for Multimodal Sleep Stage Classification Using EEG and EOG

Samaee, Mahdi, Yazdi, Mehran, Massicotte, Daniel

arXiv.org Artificial Intelligence

We propose NeuroLingua, a language-inspired framework that conceptualizes sleep as a structured physiological language. Each 30-second epoch is decomposed into overlapping 3-second subwindows ("tokens") using a CNN-based tokenizer, enabling hierarchical temporal modeling through dual-level Transformers: intra-segment encoding of local dependencies and inter-segment integration across seven consecutive epochs (3.5 minutes) for extended context. Modality-specific embeddings from EEG and EOG channels are fused via a Graph Convolutional Network, facilitating robust multimodal integration. NeuroLingua is evaluated on the Sleep-EDF Expanded and ISRUC-Sleep datasets, achieving state-of-the-art results on Sleep-EDF (85.3% accuracy, 0.800 macro F1, and 0.796 Cohen's κ) and competitive performance on ISRUC (81.9% accuracy, 0.802 macro F1, and 0.755 κ), matching or exceeding published baselines in overall and per-class metrics. The architecture's attention mechanisms enhance the detection of clinically relevant sleep microevents, providing a principled foundation for future interpretability, explainability, and causal inference in sleep research. By framing sleep as a compositional language, NeuroLingua unifies hierarchical sequence modeling and multimodal fusion, advancing automated sleep staging toward more transparent and clinically meaningful applications. Index Terms: Sleep staging, EEG, EOG, Polysomnography, Deep learning, Hierarchical sequence modeling, Multimodal fusion, Transformers, Graph neural networks, Interpretability, Explainability, Causal inference.
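The tokenization step (one 30-second epoch cut into overlapping 3-second "tokens") is easy to sketch. The abstract does not state the overlap, so the 1.5-second stride below, like the sampling rate and function name, is an assumption for illustration:

```python
import numpy as np

def tokenize_epoch(epoch, fs=100, token_sec=3, stride_sec=1.5):
    """Cut one 30-s single-channel epoch (n_samples,) into
    overlapping 3-s subwindows ('tokens') for a downstream tokenizer."""
    win, hop = int(fs * token_sec), int(fs * stride_sec)
    starts = range(0, len(epoch) - win + 1, hop)
    return np.stack([epoch[s:s + win] for s in starts])

epoch = np.zeros(30 * 100)       # one 30-s EEG epoch sampled at 100 Hz
tokens = tokenize_epoch(epoch)
print(tokens.shape)              # (n_tokens, samples_per_token)
```

Each row would then be embedded by the CNN tokenizer before the intra-segment Transformer models dependencies across tokens.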


ActiTect: A Generalizable Machine Learning Pipeline for REM Sleep Behavior Disorder Screening through Standardized Actigraphy

Bertram, David, Ophey, Anja, Röttgen, Sinah, Kufer, Konstantin, Fink, Gereon R., Kalbe, Elke, Hansen, Clint, Maetzler, Walter, Kapsecker, Maximilian, Reimer, Lara M., Jonas, Stephan, Damgaard, Andreas T., Bertelsen, Natasha B., Skjaerbaek, Casper, Borghammer, Per, Groenewald, Karolien, Ratti, Pietro-Luca, Hu, Michele T., Moreau, Noémie, Sommerauer, Michael, Bozek, Katarzyna

arXiv.org Artificial Intelligence

Isolated rapid eye movement sleep behavior disorder (iRBD) is a major prodromal marker of α-synucleinopathies, often preceding the clinical onset of Parkinson's disease, dementia with Lewy bodies, or multiple system atrophy. While wrist-worn actimeters hold significant potential for detecting RBD in large-scale screening efforts by capturing abnormal nocturnal movements, they become inoperable without a reliable and efficient analysis pipeline. This study presents ActiTect, a fully automated, open-source machine learning tool to identify RBD from actigraphy recordings. To ensure generalizability across heterogeneous acquisition settings, our pipeline includes robust preprocessing and automated sleep-wake detection to harmonize multi-device data and extract physiologically interpretable motion features characterizing activity patterns. Model development was conducted on a cohort of 78 individuals, yielding strong discrimination under nested cross-validation (AUROC = 0.95). Generalization was confirmed on a blinded local test set (n = 31, AUROC = 0.86) and on two independent external cohorts (n = 113, AUROC = 0.84; n = 57, AUROC = 0.94). To assess real-world robustness, leave-one-dataset-out cross-validation across the internal and external cohorts demonstrated consistent performance (AUROC range = 0.84-0.89). A complementary stability analysis showed that key predictive features remained reproducible across datasets, supporting the final pooled multi-center model as a robust pre-trained resource for broader deployment. By being open-source and easy to use, our tool promotes widespread adoption and facilitates independent validation and collaborative improvements, thereby advancing the field toward a unified and generalizable RBD detection model using wearable devices.
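The leave-one-dataset-out evaluation used here generalizes to any multi-cohort study. A minimal sketch on synthetic cohorts, assuming a generic scikit-learn classifier rather than the ActiTect pipeline (cohort names, sizes, and features below are all made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
cohorts = {}
for name in ("siteA", "siteB", "siteC"):     # hypothetical recording sites
    X = rng.standard_normal((80, 10))
    y = (X[:, 0] + 0.5 * rng.standard_normal(80) > 0).astype(int)
    cohorts[name] = (X, y)

# Leave-one-dataset-out: train on all cohorts but one, test on the held-out
# cohort, so the reported AUROC reflects cross-site generalization.
for held_out in cohorts:
    X_tr = np.vstack([X for c, (X, y) in cohorts.items() if c != held_out])
    y_tr = np.concatenate([y for c, (X, y) in cohorts.items() if c != held_out])
    X_te, y_te = cohorts[held_out]
    clf = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{held_out}: AUROC = {auc:.2f}")
```

Unlike ordinary k-fold cross-validation, no subject from the held-out site ever appears in training, which is what makes the 0.84-0.89 range in the paper a robustness claim rather than an in-distribution one.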


4 common sleep myths, debunked

Popular Science

Don't worry, you probably aren't swallowing spiders. All of us lie down every night and proceed to be completely unaware of what's happening in the waking world around us as we snooze. It shouldn't be surprising, then, that there are all kinds of myths about sleep. We have inaccurate ideas about what prevents us from sleeping, what helps us sleep, and what happens while we're sleeping.


A Systematic Evaluation of Self-Supervised Learning for Label-Efficient Sleep Staging with Wearable EEG

Estevan, Emilio, Sierra-Torralba, María, López-Larraz, Eduardo, Montesano, Luis

arXiv.org Artificial Intelligence

Wearable EEG devices have emerged as a promising alternative to polysomnography (PSG). As affordable and scalable solutions, their widespread adoption results in the collection of massive volumes of unlabeled data that cannot be analyzed by clinicians at scale. Meanwhile, the recent success of deep learning for sleep scoring has relied on large annotated datasets. Self-supervised learning (SSL) offers an opportunity to bridge this gap, leveraging unlabeled signals to address label scarcity and reduce annotation effort. In this paper, we present the first systematic evaluation of SSL for sleep staging using wearable EEG. We investigate a range of well-established SSL methods and evaluate them on two sleep databases acquired with the Ikon Sleep wearable EEG headband: BOAS, a high-quality benchmark containing PSG and wearable EEG recordings with consensus labels, and HOGAR, a large collection of home-based, self-recorded, and unlabeled recordings. Three evaluation scenarios are defined to study label efficiency, representation quality, and cross-dataset generalization. Results show that SSL consistently improves classification performance by up to 10% over supervised baselines, with gains particularly evident when labeled data is scarce. SSL achieves clinical-grade accuracy above 80% leveraging only 5% to 10% of labeled data, while the supervised approach requires twice the labels. Additionally, SSL representations prove robust to variations in population characteristics, recording environments, and signal quality. Our findings demonstrate the potential of SSL to enable label-efficient sleep staging with wearable EEG, reducing reliance on manual annotations and advancing the development of affordable sleep monitoring systems.
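The label-efficiency scenario amounts to training a probe on growing fractions of the labeled set on top of fixed representations. A toy sketch, assuming random features as a stand-in for SSL embeddings and a linear probe (none of the sizes or numbers below come from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in for SSL embeddings of wearable-EEG epochs; in the paper these
# would come from a pretrained encoder. Five classes mimic sleep stages.
X = rng.standard_normal((2000, 32))
w = rng.standard_normal((32, 5))
y = (X @ w + rng.standard_normal((2000, 5))).argmax(axis=1)
X_tr, y_tr, X_te, y_te = X[:1500], y[:1500], X[1500:], y[1500:]

# Label-efficiency curve: fit a linear probe on 5%, 10%, 50%, 100% of labels.
for frac in (0.05, 0.10, 0.50, 1.00):
    k = int(frac * len(X_tr))
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:k], y_tr[:k])
    print(f"{frac:.0%} labels -> test accuracy {clf.score(X_te, y_te):.2f}")
```

The paper's claim is that with SSL embeddings this curve saturates near its maximum already at the 5-10% points, whereas fully supervised training needs roughly twice the labels to reach the same accuracy.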


NAP: Attention-Based Late Fusion for Automatic Sleep Staging

Rossi, Alvise Dei, van der Meer, Julia, Schmidt, Markus H., Bassetti, Claudio L. A., Fiorillo, Luigi, Faraci, Francesca

arXiv.org Artificial Intelligence

Polysomnography signals are highly heterogeneous, varying in modality composition (e.g., EEG, EOG, ECG), channel availability (e.g., frontal, occipital EEG), and acquisition protocols across datasets and clinical sites. Most existing models that process polysomnography data rely on a fixed subset of modalities or channels and therefore neglect to fully exploit its inherently multimodal nature. We address this limitation by introducing NAP (Neural Aggregator of Predictions), an attention-based model which learns to combine multiple prediction streams using a tri-axial attention mechanism that captures temporal, spatial, and predictor-level dependencies. NAP is trained to adapt to different input dimensions. By aggregating outputs from frozen, pretrained single-channel models, NAP consistently outperforms individual predictors and simple ensembles, achieving state-of-the-art zero-shot generalization across multiple datasets. While demonstrated in the context of automated sleep staging from polysomnography, the proposed approach could be extended to other multimodal physiological applications.
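The core fusion idea (attending over the outputs of several frozen predictors) can be sketched along a single axis. NAP's actual mechanism is tri-axial (temporal, spatial, predictor-level) and trained; the version below attends only over the predictor axis with a random query standing in for learned weights, purely to show the shape of the computation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy prediction streams: 3 frozen single-channel models, 7 epochs,
# 5 sleep stages. In NAP these would be real model outputs.
rng = np.random.default_rng(0)
preds = rng.random((3, 7, 5))                      # (predictor, time, class)
preds /= preds.sum(axis=-1, keepdims=True)         # rows are probabilities

# Predictor-axis attention: score each stream with a query vector (random
# here, learned in practice) and take the attention-weighted average.
query = rng.standard_normal(5)
scores = preds @ query                             # (predictor, time)
weights = softmax(scores, axis=0)                  # attend over predictors
fused = (weights[..., None] * preds).sum(axis=0)   # (time, class)
print(fused.shape)
```

Because the weights sum to one across predictors and each input row is a probability distribution, the fused output remains a valid per-epoch distribution over stages, unlike a raw logit sum.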